no-write allocation - definition. What is no-write allocation
Diclib.com
Online Dictionary

What (who) is no-write allocation - definition

Computing component that transparently stores data so that future requests for that data can be served faster
  • A write-back cache with write allocation
  • A write-through cache with no-write allocation

no-write allocation         
<memory management> A cache policy where only processor reads are cached, thus avoiding the need for write-back or write-through. (1996-06-12)
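The policy in the definition above can be sketched in a few lines. This is an illustrative software model (the class name and the dictionary-based "memory" are invented for the example, not any real hardware interface): reads allocate a cache line, while writes go straight to the backing store and never allocate.

```python
# Minimal sketch of a write-through cache with no-write allocation.
# Only reads are cached; writes update memory directly, so no
# write-back or write-through bookkeeping is needed for cached lines.

class WriteThroughNoWriteAllocateCache:
    def __init__(self):
        self.lines = {}    # address -> value (the cache)
        self.memory = {}   # address -> value (backing store)

    def read(self, addr):
        if addr in self.lines:          # cache hit
            return self.lines[addr]
        value = self.memory.get(addr)   # cache miss: fetch from memory
        self.lines[addr] = value        # reads DO allocate a line
        return value

    def write(self, addr, value):
        self.memory[addr] = value       # write-through: memory always updated
        if addr in self.lines:          # update the line only if already present;
            self.lines[addr] = value    # a write miss does NOT allocate

cache = WriteThroughNoWriteAllocateCache()
cache.write(0x10, 42)           # write miss: memory updated, nothing cached
assert 0x10 not in cache.lines
assert cache.read(0x10) == 42   # read miss: the line is allocated now
assert 0x10 in cache.lines
```

Note the asymmetry this policy creates: a location that is only ever written never occupies a cache line, which is why it pairs naturally with a write-through cache.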
Resource allocation         
Allocation of resources among possible uses
In economics, resource allocation is the assignment of available resources to various uses. In the context of an entire economy, resources can be allocated by various means, such as markets, or planning.
write-down         
Reduction in recognized value of an entity
noun (Finance) a reduction in the estimated or nominal value of an asset.

Wikipedia

Cache (computing)

In computing, a cache (KASH) is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.
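The hit/miss distinction above can be made concrete with a tiny software cache. This is an illustrative sketch (the function and counters are invented for the example): repeated requests for the same key are hits served from the cache; first-time requests are misses that must be computed.

```python
# Count cache hits and misses for a trivial memoizing function.
hits = misses = 0
cache = {}

def cached_square(n):
    global hits, misses
    if n in cache:
        hits += 1        # cache hit: served from the cache, no recompute
        return cache[n]
    misses += 1          # cache miss: compute the result and store it
    cache[n] = n * n
    return cache[n]

for n in [2, 3, 2, 2, 5, 3]:
    cached_square(n)

# 6 requests over 3 distinct keys -> 3 misses, 3 hits
hit_rate = hits / (hits + misses)
assert (hits, misses) == (3, 3)
assert hit_rate == 0.5
```

The hit rate (here 0.5) is the usual figure of merit: the higher it is, the more work the cache saves.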

To be cost-effective and to enable efficient use of data, caches must be relatively small. Nevertheless, caches have proven themselves in many areas of computing, because typical computer applications access data with a high degree of locality of reference. Such access patterns exhibit temporal locality, where data is requested that has been recently requested already, and spatial locality, where data is requested that is stored physically close to data that has already been requested.
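Locality is what lets a small cache achieve a high hit rate. A minimal sketch (the LRU cache class and access patterns are invented for the example): a workload that revisits the same few keys (temporal locality) fits a 4-entry cache and hits almost every time, while a scan of all-distinct keys never hits at all.

```python
# A tiny LRU cache demonstrates why locality of reference matters.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)     # mark as most recently used
            return True
        self.misses += 1
        self.data[key] = None              # allocate on miss
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
        return False

local = LRUCache(4)
for _ in range(10):             # temporal locality: same 3 keys revisited
    for k in ("a", "b", "c"):
        local.get(k)

scan = LRUCache(4)
for k in range(30):             # no locality: every key is new
    scan.get(k)

# Both workloads make 30 accesses; only the local one benefits.
assert local.hits == 27 and local.misses == 3
assert scan.hits == 0
```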